88 research outputs found

    Spatial audio in small display screen devices

    Our work addresses the problem of (visual) clutter in mobile device interfaces. The solution we propose involves translating techniques from the graphical to the audio domain for exploiting space in information representation. This article presents an illustrative example in the form of a spatialised audio progress bar. In usability tests, participants performed background monitoring tasks significantly more accurately using this spatialised audio progress bar than a conventional visual one. Moreover, their performance in a simultaneously running, visually demanding foreground task was significantly improved in the eyes-free monitoring condition. These results have important implications for the design of multi-tasking interfaces for mobile devices.
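    One way a spatialised audio progress bar could map task progress onto spatial position is equal-power stereo panning; the abstract does not specify the study's actual spatialisation method, so the mapping below is a minimal illustrative sketch, with the function name `pan_gains` chosen here for exposition.

    ```python
    import math

    def pan_gains(progress):
        """Map progress in [0, 1] to (left, right) gains using equal-power
        panning, so the sound appears to travel from the left ear to the
        right ear as the background task completes."""
        angle = progress * math.pi / 2   # 0 -> full left, pi/2 -> full right
        return math.cos(angle), math.sin(angle)

    left, right = pan_gains(0.5)  # mid-task: sound centred between the ears
    ```

    Equal-power panning keeps the total acoustic energy (left² + right²) constant across positions, so perceived loudness stays steady while position conveys progress.
    
    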

    Embodied Musical Interaction

    Music is a natural partner to human-computer interaction, offering tasks and use cases for novel forms of interaction. The richness of the relationship between a performer and their instrument in expressive musical performance can provide valuable insight to human-computer interaction (HCI) researchers interested in applying these forms of deep interaction to other fields. Despite the longstanding connection between music and HCI, it is not an automatic one, and its history arguably points to as many differences as it does overlaps. Music research and HCI research both encompass broad issues and utilize a wide range of methods. In this chapter I discuss how the concept of embodied interaction can be one way to think about music interaction. I propose that the three "paradigms" of HCI and three design accounts from the interaction design literature can serve as a lens through which to consider types of music HCI. I use this conceptual framework to discuss three different musical projects: Haptic Wave, Form Follows Sound, and BioMuse.

    The Role of Auditory Features Within Slot-Themed Social Casino Games and Online Slot Machine Games

    Over the last few years, playing social casino games has become a popular entertainment activity. Social casino games are offered via social media platforms and mobile apps and resemble gambling activities. However, social casino games are not classified as gambling, as they can be played for free, outcomes may not be determined by chance, and players receive no monetary payouts. Social casino games appear to be somewhat similar to online gambling activities in terms of their visual and auditory features, but to date little research has investigated the crossover between these games. This study examines the auditory features of slot-themed social casino games and online slot machine games using a case study design. An example of each game type was played on three separate occasions, during which the auditory features (i.e., music, speech, sound effects, and the absence of sound) within the games were logged. The online slot-themed game was played in demo mode. This is the first study to provide a qualitative account of the role of auditory features within a slot-themed social casino game and an online slot machine game. Our results revealed many similarities in how sound is utilised within the two games. The sounds within these games may therefore serve functions including: setting the scene for gaming, creating an image, demarcating space, interacting with visual features, prompting players to act, communicating achievements to players, providing reinforcement, and heightening player emotions and the gaming experience. As a result, players' ability to make a clear distinction between these two activities may be reduced, which may facilitate migration between games.

    Semantic Object Prediction and Spatial Sound Super-Resolution with Binaural Sounds

    Full text link
    Humans can robustly recognize and localize objects by integrating visual and auditory cues. While machines can now do the same with images, less work has been done with sounds. This work develops an approach for dense semantic labelling of sound-making objects, based purely on binaural sounds. We propose a novel sensor setup and record a new audio-visual dataset of street scenes with eight professional binaural microphones and a 360-degree camera. The co-existence of visual and audio cues is leveraged for supervision transfer. In particular, we employ a cross-modal distillation framework consisting of a vision 'teacher' method and a sound 'student' method: the student is trained to generate the same results as the teacher. In this way, the auditory system can be trained without human annotations. We also propose two auxiliary tasks, namely a) a novel spatial sound super-resolution task to increase the spatial resolution of sounds, and b) dense depth prediction of the scene. We then formulate the three tasks into one end-to-end trainable multi-tasking network aiming to boost overall performance. Experimental results on the dataset show that 1) our method achieves promising results for semantic prediction and the two auxiliary tasks; 2) the three tasks are mutually beneficial, with joint training achieving the best performance; and 3) the number and orientations of the microphones are both important. The data and code will be released to facilitate research in this new direction.
    Project page: https://www.trace.ethz.ch/publications/2020/sound_perception/index.htm
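    The cross-modal distillation idea above can be sketched in miniature: a fixed "teacher" produces targets from one modality, and a "student" operating on another modality is trained to match those targets, so no human labels are needed. The toy below is an assumption-level illustration (a linear student fitted with stochastic gradient descent on scalar features), not the paper's actual networks.

    ```python
    def teacher(visual_feature):
        # Stand-in for a pretrained vision model: a fixed, known mapping.
        return 2.0 * visual_feature + 1.0

    def train_student(pairs, lr=0.05, epochs=500):
        """pairs: list of (audio_feature, visual_feature) from the same scene.
        The student never sees labels, only the teacher's predictions."""
        w, b = 0.0, 0.0
        for _ in range(epochs):
            for audio, visual in pairs:
                target = teacher(visual)   # supervision transferred from vision
                pred = w * audio + b       # student sees only the audio cue
                err = pred - target
                w -= lr * err * audio      # SGD step on the squared error
                b -= lr * err
        return w, b

    # Toy data: audio and visual features of the same scenes are perfectly
    # correlated here, so the student can recover the teacher's mapping.
    scenes = [(x / 10.0, x / 10.0) for x in range(-10, 11)]
    w, b = train_student(scenes)
    ```

    With correlated modalities the student converges to the teacher's mapping (w near 2, b near 1); in the paper the same principle operates over dense per-pixel predictions rather than scalars.
    
    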

    Multiple sclerosis outpatient future groups: improving the quality of participant interaction and ideation tools within service improvement activities

    Background: Improving the patient experience is a key focus within the National Health Service. This has led us to consider how health services are experienced, from both staff and patient perspectives. Novel service improvement activities bring staff and patients together to use design-led methods to improve how health services are delivered. The Multiple Sclerosis Outpatient Future Group study aimed to explore how analogies and props can be used to facilitate rich interactions between staff and patients within these activities. This paper considers how these interactions supported participants to share experiences, generate ideas and suggest service improvements. Method: A qualitative explorative study using 'future groups', a reinterpretation of the recognised focus group method directed towards exploring future alternatives, employing analogies and physical props to encourage participants to speculate about future service interactions and health experiences. Participants were people with multiple sclerosis (PwMS) and outpatient staff: staff nurses, nursing assistants, junior sisters and reception staff. Results: The use of future groups, analogies and physical props enabled PwMS and outpatient staff to invest their own ideas and feelings in the service improvement activity and envisage alternative health care scenarios. The combination of participants in the groups, with their diverse perspectives and knowledge of the service, led to a collaborative approach in which staff highlighted potential practical problems and patients ensured ideas were holistic. Service improvements were prototyped and tested in the outpatient clinic. Conclusion: Design-led methods such as future groups using analogies and physical props can be used to facilitate interactions between staff and patients in service improvement activities, leading to the generation of meaningful ideas. It is hoped that improving the quality of ideation tools within design-led methods can contribute to developing successful service interventions in service improvement activities.

    When Ears Drive Hands: The Influence of Contact Sound on Reaching to Grasp

    Background: Most research on the roles of auditory information and its interaction with vision has focused on perceptual performance. Little is known about the effects of sound cues on visually guided hand movements. Methodology/Principal Findings: We recorded the sound produced by the fingers upon contact as participants grasped stimulus objects which were covered with different materials. Then, in a further session, the pre-recorded contact sounds were delivered to participants via headphones before or following the initiation of reach-to-grasp movements towards the stimulus objects. Reach-to-grasp movement kinematics were measured under the following conditions: (i) congruent, in which the presented contact sound and the contact sound elicited by the to-be-grasped stimulus corresponded; (ii) incongruent, in which the presented contact sound was different from that generated by the stimulus upon contact; and (iii) control, in which a synthetic sound, not associated with a real event, was presented. Facilitation effects were found for congruent trials; interference effects were found for incongruent trials. In a second experiment, the upper and lower parts of the stimulus were covered with different materials. The presented sound was always congruent with the material covering either the upper or the lower half of the stimulus. Participants consistently placed their fingers on the half of the stimulus that corresponded to the presented contact sound. Conclusions/Significance: Altogether these findings offer a substantial contribution to the current debate about the type of object representations elicited by auditory stimuli and about the multisensory nature of the sensorimotor transformations underlying action.

    Timbre from Sound Synthesis and High-level Control Perspectives

    Exploring the many surprising facets of timbre through sound manipulations has been a common practice among composers and instrument makers throughout history. The digital era radically changed the approach to sound thanks to the unlimited possibilities offered by computers, which made it possible to investigate sounds without physical constraints. In this chapter we describe investigations of timbre based on the analysis-by-synthesis approach, which consists of using digital synthesis algorithms to reproduce sounds and then modifying the parameters of the algorithms to investigate their perceptual relevance. In the first part of the chapter, timbre is investigated in a musical context. An examination of the sound quality of different wood species for xylophone making is first presented. Then the influence of instrumental control on timbre is described in the case of clarinet and cello performances. In the second part of the chapter, we mainly focus on the identification of sound morphologies, so-called invariant sound structures, responsible for the evocations induced by environmental sounds, by relating basic signal descriptors and timbre descriptors to evocations in the case of car door noises, motor noises, solid objects, and their interactions.
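    The analysis-by-synthesis loop described above can be illustrated with a minimal additive-synthesis sketch: resynthesize a tone as a sum of harmonic partials, then vary a single control parameter and compare the results. The `brightness` parameter below, which attenuates upper partials, is a hypothetical stand-in for the perceptually relevant synthesis parameters the chapter studies.

    ```python
    import math

    def synthesize(f0=220.0, n_partials=8, brightness=1.0, sr=8000, dur=0.1):
        """Additive synthesis: sum of harmonic partials at f0, 2*f0, ...
        'brightness' in (0, 1] scales each successive partial, so lower
        values attenuate the upper harmonics and dull the timbre."""
        n = int(sr * dur)
        samples = []
        for i in range(n):
            t = i / sr
            s = sum(
                (brightness ** k) / (k + 1)
                * math.sin(2 * math.pi * f0 * (k + 1) * t)
                for k in range(n_partials)
            )
            samples.append(s)
        return samples

    dull = synthesize(brightness=0.2)    # upper partials strongly attenuated
    bright = synthesize(brightness=1.0)  # full harmonic series
    ```

    Listening to (or measuring) the two variants while holding pitch and duration fixed is the essence of the approach: only the parameter under study changes, so any perceptual difference can be attributed to it.
    
    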